I recently published results for Postgres 18 beta1 on a small server using the Insert Benchmark with a cached workload and low concurrency. Here I share results for it with an IO-bound workload.
tl;dr - for 17.5 vs 18 beta1
- the write-heavy steps (l.i1, l.i2) are up to 5% slower in 18 beta1 vs 17.5
- the range query steps (qr100, qr500, qr1000) are up to 3% slower in 18 beta1 vs 17.5
- the point query steps (qp100, qp500, qp1000) are up to 2% slower in 18 beta1 vs 17.5
tl;dr for 14.0 through 18 beta1
- the write-heavy steps (l.i1, l.i2) are up to 15% slower in 18 beta1 vs 14.0
- the range query steps (qr100, qr500, qr1000) are up to 4% slower in 18 beta1 vs 14.0
- the point query steps (qp100, qp500, qp1000) are up to 1% faster in 18 beta1 vs 14.0
Builds, configuration and hardware
I compiled Postgres from source using -O2 -fno-omit-frame-pointer for versions 14.0, 14.18, 15.0, 15.13, 16.0, 16.9, 17.0, 17.5 and 18 beta1.
The server is an ASUS ExpertCenter PN53 with an AMD Ryzen 7 7735HS CPU, 8 cores, SMT disabled, 32G of RAM and one NVMe device for the database. The OS has been updated to Ubuntu 24.04 -- I used 22.04 prior to that. More details on it are here.
For Postgres versions 14.0 through 17.5 the configuration files are in the pg* subdirectories here with the name conf.diff.cx10a_c8r32. For Postgres 18 beta1 I used 3 configuration variations, which are here:
- conf.diff.cx10b_c8r32
- uses io_method='sync' to match Postgres 17 behavior
- conf.diff.cx10c_c8r32
- uses io_method='worker' and io_workers=16 to do async IO via a thread pool. I eventually learned that 16 is too large.
- conf.diff.cx10d_c8r32
- uses io_method='io_uring' to do async IO via io_uring
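The three 18 beta1 configurations differ only in the async IO settings. A sketch of the relevant lines, assuming the rest of each file matches the base config:

```ini
# conf.diff.cx10b_c8r32 -- match Postgres 17 behavior
io_method = 'sync'

# conf.diff.cx10c_c8r32 -- async IO via a worker pool
io_method = 'worker'
io_workers = 16        # in hindsight, too large for this 8-core server

# conf.diff.cx10d_c8r32 -- async IO via io_uring
io_method = 'io_uring'
```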
The Benchmark
The benchmark is explained here and is run with 1 client and 1 table with 800M rows.
The benchmark steps are:
- l.i0
- insert 800 million rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
- l.x
- create 3 secondary indexes per table. There is one connection per client.
- l.i1
- use 2 connections/client. One inserts 4M rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
- l.i2
- like l.i1 but each transaction modifies 5 rows (small transactions) and 1M rows are inserted and deleted per table.
- Wait for X seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of X is a function of the table size.
- qr100
- use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. This step is run for 1800 seconds. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested.
- qp100
- like qr100 except uses point queries on the PK index
- qr500
- like qr100 but the insert and delete rates are increased from 100/s to 500/s
- qp500
- like qp100 but the insert and delete rates are increased from 100/s to 500/s
- qr1000
- like qr100 but the insert and delete rates are increased from 100/s to 1000/s
- qp1000
- like qp100 but the insert and delete rates are increased from 100/s to 1000/s
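Because each query step runs for a fixed 1800 seconds, the background write volume per step is just rate x duration. A small sketch of that arithmetic (names are mine, not from the benchmark scripts):

```python
# Background write volume for the fixed-duration query steps.
STEP_SECONDS = 1800

def background_writes(rate_per_sec: int, seconds: int = STEP_SECONDS) -> int:
    """Inserts (and deletes, done at the same rate) during one query step."""
    return rate_per_sec * seconds

for step, rate in [("qr100/qp100", 100), ("qr500/qp500", 500), ("qr1000/qp1000", 1000)]:
    print(step, background_writes(rate))
# qr1000/qp1000 does 1,800,000 inserts and 1,800,000 deletes per step
```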
Results: overview
The performance reports are here.
The summary sections linked above from the performance report have 3 tables. The first shows absolute throughput for each DBMS tested by benchmark step. The second has throughput relative to the version from the first row of the table, which makes it easy to see how performance changes over time. The third shows the background insert rate for the benchmark steps that have background inserts, which makes it easy to see which DBMS+configs failed to meet the SLA. Here all systems sustained the target rates.
Below I use relative QPS to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is the result for some version and $base is the result from either 14.0 or 17.5.
When relative QPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. The Q in relative QPS measures:
- insert/s for l.i0, l.i1, l.i2
- indexed rows/s for l.x
- range queries/s for qr100, qr500, qr1000
- point queries/s for qp100, qp500, qp1000
Below I use colors to highlight the relative QPS values with red for <= 0.95, green for >= 1.05 and grey for values between 0.95 and 1.05.
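The relative QPS computation and the color buckets above can be sketched as follows (function names are mine, not from the benchmark scripts):

```python
def relative_qps(qps_me: float, qps_base: float) -> float:
    """rQPS = QPS for $me / QPS for $base, where base is 14.0 or 17.5."""
    return qps_me / qps_base

def color(rqps: float) -> str:
    """Highlighting used in the summary tables."""
    if rqps <= 0.95:
        return "red"      # regression
    if rqps >= 1.05:
        return "green"    # improvement
    return "grey"         # within the noise

print(color(relative_qps(95.0, 100.0)))  # 0.95 -> red
```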
Results: 17.5 and 18 beta1
The performance summary is here.
Below I use relativeQPS (rQPS) to compare 18 beta1 with 17.5: when rQPS is > 1 then 18 beta1 is faster than 17.5; when rQPS is < 1 then 18 beta1 is slower; when it is 1.0 they have the same throughput. When rQPS is 0.90 I might say that 18 beta1 is 10% slower.
The summary of the summary is:
- the write-heavy steps (l.i1, l.i2) are up to 5% slower in 18 beta1 vs 17.5
- the range query steps (qr100, qr500, qr1000) are up to 3% slower in 18 beta1 vs 17.5
- the point query steps (qp100, qp500, qp1000) are up to 2% slower in 18 beta1 vs 17.5
The summary is:
- the initial load step (l.i0)
- rQPS is (1.00, 0.99, 1.00) with io_method= (sync, worker, io_uring) vs 17.5
- the create index step (l.x)
- rQPS is (1.00, 1.02, 1.00) with io_method= (sync, worker, io_uring) vs 17.5
- the write-heavy steps (l.i1, l.i2)
- rQPS is (0.95, 0.98) in 18 beta1 with io_method=sync vs 17.5
- rQPS is (0.98, 0.96) in 18 beta1 with io_method=worker vs 17.5
- rQPS is (0.99, 0.98) in 18 beta1 with io_method=io_uring vs 17.5
- the range query steps (qr100, qr500, qr1000)
- rQPS is (0.98, 0.97, 0.98) in 18 beta1 with io_method=sync vs 17.5
- rQPS is (0.99, 0.97, 0.97) in 18 beta1 with io_method=worker vs 17.5
- rQPS is (0.99, 0.99, 0.99) in 18 beta1 with io_method=io_uring vs 17.5
- the point query steps (qp100, qp500, qp1000)
- rQPS is (1.00, 1.00, 0.99) in 18 beta1 with io_method=sync vs 17.5
- rQPS is (0.99, 0.99, 0.98) in 18 beta1 with io_method=worker vs 17.5
- rQPS is (1.00, 0.99, 0.98) in 18 beta1 with io_method=io_uring vs 17.5
The regressions in the write-heavy steps (l.i1, l.i2) might be explained by new CPU overhead. See the cpupq column here (cpupq is CPU/operation). Otherwise, the vmstat and iostat metrics, when divided by throughput, look similar. From the throughput vs time charts, the performance bottleneck was the response time for deletes.
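The cpupq metric is CPU per operation. A hedged sketch of how such a metric can be derived from vmstat samples and measured throughput; this is my reconstruction, not the report's exact code:

```python
def cpupq(us_pct: float, sy_pct: float, num_cpus: int, ops_per_sec: float) -> float:
    """Approximate CPU microseconds per operation from vmstat us/sy
    percentages, the CPU count, and measured throughput.
    This is an illustrative derivation, not the benchmark's own script."""
    cpu_secs_per_sec = (us_pct + sy_pct) / 100.0 * num_cpus
    return cpu_secs_per_sec * 1e6 / ops_per_sec

# e.g. 40% us + 10% sy across 8 cores at 5000 ops/s
print(round(cpupq(40, 10, 8, 5000)))  # 800 usecs of CPU per operation
```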
The regressions in the range query steps might also be explained by new CPU overhead. See the cpupq column here (cpupq is CPU/operation) for qr100, qr500 and qr1000. Otherwise the iostat and vmstat metrics look similar.
Results: 14.0 through 18 beta1
The performance summary is here.
Below I use relativeQPS (rQPS) to compare 18 beta1 with 14.0: when rQPS is > 1 then 18 beta1 is faster than 14.0; when rQPS is < 1 then 18 beta1 is slower; when it is 1.0 they have the same throughput. When rQPS is 0.90 I might say that 18 beta1 is 10% slower.
The summary of the summary is:
- the write-heavy steps (l.i1, l.i2) are up to 15% slower in 18 beta1 vs 14.0
- the range query steps (qr100, qr500, qr1000) are up to 4% slower in 18 beta1 vs 14.0
- the point query steps (qp100, qp500, qp1000) are up to 1% faster in 18 beta1 vs 14.0
Comparing 18 beta1 with io_method=sync vs 14.0
- the initial load step (l.i0)
- rQPS is 1.01 for 18 beta1 vs 14.0
- the create index step (l.x)
- rQPS is 1.14 for 18 beta1 vs 14.0
- the write-heavy steps (l.i1, l.i2)
- rQPS is (0.87, 0.85) for 18 beta1 vs 14.0
- Regressions for these steps are not new; they started in the 14.x releases
- the range query steps (qr100, qr500, qr1000)
- rQPS is (0.99, 0.98, 0.96) for 18 beta1 vs 14.0
- the point query steps (qp100, qp500, qp1000)
- rQPS is (1.01, 1.00, 1.00) for 18 beta1 vs 14.0
The regressions in the write-heavy steps (l.i1, l.i2) might be explained by new CPU overhead. See the cpupq column here (cpupq is CPU/operation). Otherwise, the vmstat and iostat metrics, when divided by throughput, look similar. From the throughput vs time charts, the performance bottleneck was the response time for deletes.
The regressions in the range query steps might also be explained by new CPU overhead. See the cpupq column here (cpupq is CPU/operation) for qr100, qr500 and qr1000. Otherwise the iostat and vmstat metrics look similar.